AIssistant: An Agentic Approach for Human--AI Collaborative Scientific Work on Reviews and Perspectives in Machine Learning

Gaddipati, Sasi Kiran, Keya, Farhana, Rabby, Gollam, Auer, Sören

arXiv.org Artificial Intelligence

Advances in AI-assisted research have introduced powerful tools for literature retrieval, hypothesis generation, experimentation, and manuscript preparation. However, systems remain fragmented and lack human-centred workflows. To address these gaps, we introduce AIssistant, an agentic, open-source Human-AI collaborative framework designed to simplify the end-to-end creation of scientific workflows. Since our development is still in an early stage, we present here the first experiments with AIssistant for perspective and review research papers in machine learning. Our system integrates modular tools and agents for literature synthesis, section-wise experimentation, citation management, and automatic LaTeX paper text generation, while maintaining human oversight at every stage to ensure accuracy, coherence, and scholarly rigour. We conducted a comprehensive evaluation across three layers: (1) Independent Human Review, following NeurIPS double-blind standards; (2) Automated LLM Review, using GPT-5 as a scalable human review proxy; and (3) Program Chair Oversight, where the chair monitors the entire review process and makes final validation and acceptance decisions. The results demonstrate that AIssistant improves drafting efficiency and thematic consistency. Nonetheless, Human-AI collaboration remains essential for maintaining factual correctness, methodological soundness, and ethical compliance. Despite its effectiveness, we identify key limitations, including hallucinated citations, difficulty adapting to dynamic paper structures, and incomplete integration of multimodal content.


Survey on Vision-Language-Action Models

Adilkhanov, Adilzhan, Yelenov, Amir, Seitzhanov, Assylkhan, Mazhitov, Ayan, Abdikarimov, Azamat, Sandykbayeva, Danissa, Kenzhebek, Daryn, Mukashev, Dinmukhammed, Umurbekov, Ilyas, Chumakov, Jabrail, Spanova, Kamila, Burunchina, Karina, Yergibay, Madina, Issa, Margulan, Zabirova, Moldir, Zhuzbay, Nurdaulet, Kabdyshev, Nurlan, Zhaniyar, Nurlan, Yermagambet, Rasul, Chibar, Rustam, Seitzhan, Saltanat, Khajikhanov, Soibkhon, Taunyazov, Tasbolat, Galimzhanov, Temirlan, Kaiyrbay, Temirlan, Mussin, Tleukhan, Syrymova, Togzhan, Kostyukova, Valeriya, Massalim, Yerkebulan, Kassym, Yermakhan, Nurbayeva, Zerde, Kappassov, Zhanat

arXiv.org Artificial Intelligence

This paper presents an AI-generated review of Vision-Language-Action (VLA) models, summarizing key methodologies, findings, and future directions. The content is produced using large language models (LLMs) and is intended only for demonstration purposes. This work does not represent original research, but highlights how AI can help automate literature reviews. As AI-generated content becomes more prevalent, ensuring accuracy, reliability, and proper synthesis remains a challenge. Future research will focus on developing a structured framework for AI-assisted literature reviews, exploring techniques to enhance citation accuracy, source credibility, and contextual understanding. By examining the potential and limitations of LLMs in academic writing, this study aims to contribute to the broader discussion of integrating AI into research workflows. This work serves as a preliminary step toward establishing systematic approaches for leveraging AI in literature review generation, making academic knowledge synthesis more efficient and scalable.


HiReview: Hierarchical Taxonomy-Driven Automatic Literature Review Generation

Hu, Yuntong, Li, Zhuofeng, Zhang, Zheng, Ling, Chen, Kanjiani, Raasikh, Zhao, Boxin, Zhao, Liang

arXiv.org Artificial Intelligence

In this work, we present HiReview, a novel framework for hierarchical taxonomy-driven automatic literature review generation. With the exponential growth of academic documents, manual literature reviews have become increasingly labor-intensive and time-consuming, while traditional summarization models struggle to generate comprehensive document reviews effectively. Large language models (LLMs), with their powerful text processing capabilities, offer a potential solution; however, research on incorporating LLMs for automatic document generation remains limited. To address key challenges in large-scale automatic literature review generation (LRG), we propose a two-stage taxonomy-then-generation approach that combines graph-based hierarchical clustering with retrieval-augmented LLMs. First, we retrieve the most relevant sub-community within the citation network, then generate a hierarchical taxonomy tree by clustering papers based on both textual content and citation relationships. In the second stage, an LLM generates coherent and contextually accurate summaries for clusters or topics at each hierarchical level, ensuring comprehensive coverage and logical organization of the literature. Extensive experiments demonstrate that HiReview significantly outperforms state-of-the-art methods, achieving superior hierarchical organization, content relevance, and factual accuracy in automatic literature review generation tasks.
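The abstract describes a first stage that groups papers using both textual content and citation relationships. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: a toy similarity score blending title-word overlap with a citation-link bonus, fed into a greedy single-link grouping. All names, weights, and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of a taxonomy-building first stage: cluster papers
# by a combined score over textual overlap and citation links.
# Weights (alpha) and threshold are illustrative, not from the paper.

def jaccard(a, b):
    """Jaccard similarity between the word sets of two titles."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def combined_similarity(p, q, cites, alpha=0.5):
    """Blend text overlap with a bonus when either paper cites the other."""
    text = jaccard(p["title"], q["title"])
    link = 1.0 if (q["id"] in cites.get(p["id"], ()) or
                   p["id"] in cites.get(q["id"], ())) else 0.0
    return alpha * text + (1 - alpha) * link

def cluster(papers, cites, threshold=0.3):
    """Greedy single-link grouping: attach each paper to the first
    cluster containing a sufficiently similar member."""
    clusters = []
    for p in papers:
        for c in clusters:
            if any(combined_similarity(p, q, cites) >= threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

papers = [
    {"id": 1, "title": "survey of graph neural networks"},
    {"id": 2, "title": "graph neural networks for molecules"},
    {"id": 3, "title": "bayesian optimization methods"},
]
cites = {2: {1}}  # paper 2 cites paper 1
groups = cluster(papers, cites)  # papers 1 and 2 group together
```

A real system would replace the title-word Jaccard score with embedding similarity and the greedy pass with hierarchical clustering over the citation sub-community, but the blended-similarity idea is the same.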


Refine Scientific Literature Searches using ChatGPT

#artificialintelligence

While being acutely aware of the potential harm involved in using Artificial Intelligence (AI), I can't help but be intrigued by the possibilities. After listening to academics and following ChatGPT fans on social media, it is no secret that ChatGPT is not a good option for finding the best peer-reviewed academic literature. ChatGPT will "hallucinate" articles that don't even exist. This weakness is a real cause for alarm, because it has the potential to perpetuate misinformation.


A Comprehensive Review of Digital Twin -- Part 1: Modeling and Twinning Enabling Technologies

Thelen, Adam, Zhang, Xiaoge, Fink, Olga, Lu, Yan, Ghosh, Sayan, Youn, Byeng D., Todd, Michael D., Mahadevan, Sankaran, Hu, Chao, Hu, Zhen

arXiv.org Artificial Intelligence

As an emerging technology in the era of Industry 4.0, digital twin is gaining unprecedented attention because of its promise to further optimize process design, quality control, health monitoring, decision and policy making, and more, by comprehensively modeling the physical world as a group of interconnected digital models. In a two-part series of papers, we examine the fundamental role of different modeling techniques, twinning enabling technologies, and uncertainty quantification and optimization methods commonly used in digital twins. This first paper presents a thorough literature review of digital twin trends across many disciplines currently pursuing this area of research. Then, digital twin modeling and twinning enabling technologies are further analyzed by classifying them into two main categories: physical-to-virtual, and virtual-to-physical, based on the direction in which data flows. Finally, this paper provides perspectives on the trajectory of digital twin technology over the next decade, and introduces a few emerging areas of research which will likely be of great use in future digital twin research. In part two of this review, the role of uncertainty quantification and optimization are discussed, a battery digital twin is demonstrated, and more perspectives on the future of digital twin are shared.


Automated detection of Alzheimer disease using MRI images and deep neural networks -- A review

Singh, Narotam, Patteshwari, D., Soni, Neha, Kapoor, Amita

arXiv.org Artificial Intelligence

Early detection of Alzheimer disease is crucial for deploying interventions and slowing the disease progression. Many machine learning and deep learning algorithms have been explored in the past decade with the aim of building automated detection systems for Alzheimer disease. Advancements in data augmentation techniques and advanced deep learning architectures have opened up new frontiers in this field, and research is moving at a rapid pace. Hence, the purpose of this survey is to provide an overview of recent research on deep learning models for Alzheimer disease diagnosis. In addition to categorizing the numerous data sources, neural network architectures, and commonly used assessment measures, we also classify the works by implementation and reproducibility. Our objective is to assist interested researchers in keeping up with the newest developments and in reproducing earlier investigations as benchmarks. In addition, we also indicate future research directions for this topic.


Combating Collusion Rings is Hard but Possible

Boehmer, Niclas, Bredereck, Robert, Nichterlein, André

arXiv.org Artificial Intelligence

A recent report of Littmann [Commun. ACM '21] outlines the existence and the fatal impact of collusion rings in academic peer reviewing. We introduce and analyze the problem Cycle-Free Reviewing that aims at finding a review assignment without the following kind of collusion ring: A sequence of reviewers each reviewing a paper authored by the next reviewer in the sequence (with the last reviewer reviewing a paper of the first), thus creating a review cycle where each reviewer gives favorable reviews. As a result, all papers in that cycle have a high chance of acceptance independent of their respective scientific merit. We observe that review assignments computed using a standard Linear Programming approach typically admit many short review cycles. On the negative side, we show that Cycle-Free Reviewing is NP-hard in various restricted cases (i.e., when every author is qualified to review all papers and one wants to prevent that authors review each other's or their own papers or when every author has only one paper and is only qualified to review few papers). On the positive side, among others, we show that, in some realistic settings, an assignment without any review cycles of small length always exists. This result also gives rise to an efficient heuristic for computing (weighted) cycle-free review assignments, which we show to be of excellent quality in practice.
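The collusion pattern in this abstract is a directed cycle: reviewer u reviews a paper authored by v, v reviews one authored by w, and so on back to u. The sketch below illustrates that structure (it is not the paper's NP-hardness construction or its heuristic): a review assignment is modeled as a directed graph, and a depth-first search reports one directed cycle if any exists.

```python
# Illustrative sketch: model a review assignment as a directed graph
# with an edge u -> v whenever u reviews a paper authored by v.
# Any directed cycle is exactly the collusion-ring pattern described.

def find_cycle(edges):
    """Return one directed cycle as a list of nodes, or None."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / done
    color = {u: WHITE for u in graph}
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph[u]:
            if color[v] == GRAY:           # back edge: cycle found
                return stack[stack.index(v):]
            if color[v] == WHITE:
                cycle = dfs(v)
                if cycle:
                    return cycle
        stack.pop()
        color[u] = BLACK
        return None

    for u in graph:
        if color[u] == WHITE:
            cycle = dfs(u)
            if cycle:
                return cycle
    return None

# Alice reviews Bob's paper, Bob reviews Carol's, Carol reviews Alice's:
reviews = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
           ("dave", "alice")]
ring = find_cycle(reviews)  # the alice -> bob -> carol ring
```

Checking an assignment this way is easy; the hard problem the paper studies is *constructing* an assignment that admits no such cycles while respecting reviewer qualifications and conflicts.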


Review on COVID‐19 diagnosis models based on machine learning and deep learning approaches

#artificialintelligence

COVID-19 is the disease caused by a novel coronavirus called the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Recently, COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed several intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. Therefore, this paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis.


Review Paper: PointNetGPD- Detecting Grasp Configuration from Point Sets

#artificialintelligence

In this post, I want to review a technique which works directly with point clouds to detect a grasp configuration. By grasp configuration, I mean the position and orientation of the gripper. The following picture shows a general overview of the approach. To summarize, the key contribution of this work is a network that evaluates grasp quality by performing geometric analysis directly on a 3D point cloud, building on the PointNet architecture. Compared with other CNN-based methods, this approach better exploits the 3D geometry information in the depth image without any hand-crafted features, while maintaining a relatively small number of parameters for learning and inference efficiency.


Reviewing recent advancements in the development of neuro-inspired computing chips

#artificialintelligence

In recent years, many research teams worldwide have been developing computational techniques inspired by the human brain, such as deep learning algorithms. While some of these techniques are considered highly promising for a wide range of applications, conventional hardware does not always support their computational load and thus can limit their performance. A possible solution for overcoming the limitations of existing hardware and ensuring that brain-inspired computational techniques achieve optimal results entails the creation of new electronic components that better reflect the structure of the human brain. A class of neuro-inspired computing chips is specifically designed for artificial intelligence (AI) applications, mimicking the neural structures in the brains of humans and other animals. Researchers at Tsinghua University in China have reviewed recent advancements in the design of neuro-inspired computing chips to gain insight into the progress made so far and identify challenges that still need to be overcome.